

Section: Application Domains

Spoken dialog systems

A Spoken Dialogue System (SDS) is a system enabling humans to interact with machines through speech. In contrast with command-and-control or question-answering systems that react to a single utterance, an SDS builds a real interaction over time and tries to achieve complex tasks (e.g. hotel booking, appointment scheduling) by gathering pieces of information over several turns of dialogue. To do so, besides the required speech and language processing modules (e.g. speech recognition and synthesis, language understanding and generation), a dialogue management module is needed that decides what to say in any situation so as to achieve the goal in the most natural and efficient way, recovering from speech processing errors in a seamless manner. A rough sketch of this module decomposition is given below.
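As an illustration only (the module names and state representation below are hypothetical assumptions, not the team's actual software), one dialogue turn can be viewed as a pipeline in which the dialogue manager sits between understanding and generation:

```python
# Hypothetical sketch of one turn in a slot-filling SDS.
# All module interfaces here are illustrative assumptions.

def dialogue_turn(audio, state, asr, nlu, manager, nlg, tts):
    """Process one user turn and return the system's spoken reply."""
    hypothesis = asr.transcribe(audio)        # speech recognition (noisy)
    user_act = nlu.parse(hypothesis)          # language understanding
    state = manager.update(state, user_act)   # track slots/beliefs across turns
    system_act = manager.decide(state)        # e.g. ask, confirm, book
    text = nlg.realize(system_act)            # language generation
    return tts.synthesize(text), state        # speech synthesis
```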

The dialogue management module thus takes sequences of decisions to achieve a long-term goal in an unknown, noisy and hard-to-model environment (since it includes human users). For this reason, we work on machine learning techniques such as reinforcement and imitation learning to address this problem of sequential decision making under uncertainty.
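A minimal sketch of this formulation, assuming a tabular Q-learning agent over a small discrete dialogue state space (e.g. which slots are filled or confirmed) and a user simulator providing rewards (a per-turn penalty plus a bonus for task success); the action set and environment interface are purely illustrative:

```python
import random
from collections import defaultdict

# Illustrative system acts and hyperparameters (assumptions, not the team's setup).
ACTIONS = ["ask_slot", "confirm_slot", "book"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def q_update(state, action, reward, next_state, done):
    """One-step Q-learning update after each dialogue turn."""
    target = reward if done else reward + GAMMA * max(Q[next_state].values())
    Q[state][action] += ALPHA * (target - Q[state][action])

def train_episode(env):
    """Run one simulated dialogue against a hypothetical user simulator `env`
    exposing reset()/step(action) -> (next_state, reward, done)."""
    state, done = env.reset(), False
    while not done:
        action = choose_action(state)
        next_state, reward, done = env.step(action)
        q_update(state, action, reward, next_state, done)
        state = next_state
```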

In addition to bringing novel and efficient solutions to this problem, we are interested in the new challenges that this type of application raises for our research in machine learning. Indeed, having a human in the learning loop typically requires dealing with non-stationarity, data efficiency and safety, as well as cooperation and imitation.

We collaborate with companies such as Orange Labs on this topic, and several projects are ongoing (ANR MaRDi, CHIST-ERA IGLU). We will also participate in an H2020 project on human-robot interaction starting in 2016 (BabyRobot). We organised a workshop at ICML this year: Machine Learning for Interactive Systems (MLIS). Olivier Pietquin was an invited panelist at the NIPS workshop on spoken language understanding and dialogue.